Image super-resolution is a common task on mobile and IoT devices, where one often needs to upscale and enhance low-resolution images and video frames. While numerous solutions have been proposed for this problem in the past, they are usually not compatible with low-power mobile NPUs having many computational and memory constraints. In this Mobile AI challenge, we address this problem and asked the participants to design an efficient quantized image super-resolution solution that can demonstrate real-time performance on mobile NPUs. The participants were provided with the DIV2K dataset and trained quantized INT8 models to perform high-quality 3X image upscaling. The runtime of all models was evaluated on the Synaptics VS680 Smart Home board with a dedicated edge NPU capable of accelerating quantized neural networks. All proposed solutions are fully compatible with the above NPU, demonstrating rates of up to 60 FPS when reconstructing Full HD resolution images. A detailed description of all models developed in the challenge is provided in this paper.
The role of mobile cameras has increased dramatically over the past few years, leading to more and more research in automatic image quality enhancement and RAW photo processing. In this Mobile AI challenge, the target was to develop an efficient end-to-end AI-based image signal processing (ISP) pipeline replacing the standard mobile ISPs that can run on modern smartphone GPUs using TensorFlow Lite. The participants were provided with a large-scale Fujifilm UltraISP dataset consisting of thousands of paired photos captured with a normal mobile camera sensor and a professional 102MP medium-format Fujifilm GFX100 camera. The runtime of the resulting models was evaluated on the Snapdragon 8 Gen 1 GPU, which provides excellent acceleration results for the majority of common deep learning ops. The proposed solutions are compatible with all recent mobile GPUs and are able to process Full HD photos in 20-50 milliseconds while achieving high fidelity results. A detailed description of all models developed in this challenge is provided in this paper.
Accurate tooth volume segmentation is a prerequisite for computer-aided dental analysis. Deep learning-based tooth segmentation methods have achieved satisfactory performance but require a large amount of tooth data. Publicly available dental data are limited, which means that existing methods cannot be reproduced, evaluated, and applied in clinical practice. In this paper, we establish a 3D dental CBCT dataset, CTooth+, with 22 fully annotated volumes and 146 unlabeled volumes. We further evaluate several state-of-the-art tooth volume segmentation strategies based on fully supervised learning, semi-supervised learning, and active learning, and define performance baselines. This work provides a new benchmark for the tooth volume segmentation task, and the experiments can serve as baselines for future AI-based dental imaging research and clinical application development.
In this paper, we introduce an unsupervised cancer segmentation framework for histology images. The framework involves an effective contrastive learning scheme for extracting distinctive visual representations for segmentation. The encoder is a deep U-Net (DU-Net) structure that contains an extra fully convolutional layer compared with the normal U-Net. The contrastive learning scheme is developed to address the lack of training sets with high-quality annotations of tumor boundaries. A specific set of data augmentation techniques is employed to improve the discriminability of the color features learned by contrastive learning. Smoothing and noise elimination are performed with convolutional conditional random fields. The experiments demonstrate competitive segmentation performance, better than that of some popular supervised networks.
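As a toy illustration of the kind of contrastive objective the abstract describes (this is a generic NT-Xent loss on paired augmented views, not the paper's actual implementation), a minimal NumPy sketch:

```python
import numpy as np

def nt_xent(z1, z2, tau=0.5):
    """Normalized-temperature cross-entropy loss over a batch of paired
    augmented views. z1[i] and z2[i] are embeddings of two augmentations
    of the same image; all other rows act as negatives."""
    z = np.concatenate([z1, z2], axis=0)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # unit-normalize
    sim = z @ z.T / tau                                # cosine similarities
    np.fill_diagonal(sim, -np.inf)                     # exclude self-pairs
    n = len(z1)
    # each view's positive is its counterpart in the other batch half
    targets = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    logp = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -logp[np.arange(2 * n), targets].mean()
```

Well-separated, matching pairs yield a low loss; a full pipeline would feed encoder outputs (here, DU-Net features) through this objective during training.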
3D tooth segmentation is a prerequisite for computer-aided dental diagnosis and treatment. However, segmenting all tooth regions manually is subjective and time-consuming. Recently, deep learning-based segmentation methods have produced convincing results and reduced the manual annotation effort, but they require a large amount of ground truth for training. To our knowledge, there is little publicly available tooth data for 3D segmentation research. In this paper, we establish a fully annotated cone-beam computed tomography dataset, CTooth, with tooth gold standards. The dataset contains 22 volumes (7363 slices) with fine tooth labels annotated by experienced radiographic interpreters. To ensure a relatively even data sampling distribution, variance is included in the teeth, covering missing teeth and dental restorations. Several state-of-the-art segmentation methods are evaluated on this dataset. We then further summarize and apply a series of 3D attention-based U-Net variants for segmenting tooth volumes. This work provides a new benchmark for the tooth volume segmentation task. Experimental evidence shows that the attention modules of the 3D U-Net structure boost responses in tooth regions and suppress the influence of background and noise. The best performance is achieved by the 3D U-Net with the SKNet attention module, at 88.04% Dice and 78.71% IoU respectively. The attention-based U-Net framework outperforms other state-of-the-art methods on the CTooth dataset. The codebase and dataset have been released.
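The abstract reports results in Dice and IoU. For reference, a minimal NumPy sketch of how these two overlap metrics are computed on binary segmentation masks (a standard definition, not code from the paper):

```python
import numpy as np

def dice_iou(pred, gt):
    """Dice coefficient and IoU between two binary segmentation masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    dice = 2.0 * inter / (pred.sum() + gt.sum())   # 2|A∩B| / (|A|+|B|)
    iou = inter / np.logical_or(pred, gt).sum()    # |A∩B| / |A∪B|
    return dice, iou
```

For 3D volumes the same formulas apply voxel-wise; Dice is always at least as large as IoU for a non-empty overlap, which is consistent with the 88.04% Dice vs. 78.71% IoU figures quoted above.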
Precise and rapid categorization of images in the B-scan ultrasound modality is vital for diagnosing ocular diseases. Nevertheless, distinguishing various diseases in ultrasound still challenges experienced ophthalmologists. Therefore, a novel contrastive disentangled network (CDNet) is developed in this work, aiming to tackle the fine-grained image categorization (FGIC) challenges of ocular abnormalities in ultrasound images, including intraocular tumor (IOT), retinal detachment (RD), posterior scleral staphyloma (PSS), and vitreous hemorrhage (VH). The three essential components of CDNet are the weakly-supervised lesion localization module (WSLL), the contrastive multi-zoom (CMZ) strategy, and the hyperspherical contrastive disentangled loss (HCD-Loss). These components facilitate feature disentanglement for fine-grained recognition on both the input and output sides. The proposed CDNet is validated on our ZJU Ocular Ultrasound Dataset (ZJUOUSD), consisting of 5213 samples. Furthermore, the generalization ability of CDNet is validated on two public and widely-used chest X-ray FGIC benchmarks. Quantitative and qualitative results demonstrate the efficacy of our proposed CDNet, which achieves state-of-the-art performance in the FGIC task. The code is available at: https://github.com/zeroonegame/cdnet-for-ous-fgic.
Skull stripping is a crucial prerequisite step in the analysis of brain magnetic resonance images (MRI). Although many excellent works and tools have been proposed, they suffer from low generalization capability. For instance, a model trained on a dataset with specific imaging parameters cannot be well applied to other datasets with different imaging parameters. In particular, for lifespan datasets, a model trained on an adult dataset is not applicable to an infant dataset due to the large domain difference. To address this issue, numerous methods have been proposed, among which domain adaptation based on feature alignment is the most common. Unfortunately, this approach has some inherent shortcomings: it needs to be retrained for each new domain and requires concurrent access to the input images of both domains. In this paper, we design a plug-and-play shape refinement (PSR) framework for multi-site and lifespan skull stripping. To deal with the domain shift between multi-site lifespan datasets, we take advantage of the brain shape prior, which is invariant to imaging parameters and ages. Experiments demonstrate that our framework outperforms the state-of-the-art methods on multi-site lifespan datasets.
Reinforcement learning (RL) involves performing exploratory actions in an unknown system. This can place a learning agent in dangerous and potentially catastrophic system states. Current approaches to safe learning in RL simultaneously trade off safe exploration and task fulfillment. In this paper, we introduce a new generation of RL solvers that learn to minimize safety violations while maximizing the task reward to the extent that can be tolerated by safe policies. Our approach introduces a novel two-player framework for safe RL called the Distributive Exploration Safety Training Algorithm (DESTA). At the core of DESTA is a game between two adaptive agents: a safety agent, whose task is to minimize safety violations, and a task agent, whose goal is to maximize the environment reward. Specifically, the safety agent can selectively take control of the system at any given point to prevent safety violations, while the task agent is free to execute its policy at all other states. This framework enables the safety agent to learn to take actions at certain states that minimize future safety violations, both during training and at test time, while the task agent performs actions that maximize task performance everywhere else. Theoretically, we prove that DESTA converges to stable points that minimize safety violations with respect to pretrained policies. Empirically, we show DESTA's ability to improve the safety of existing policies and, second, to construct safe RL policies when the task agent and safety agent are trained concurrently. We demonstrate DESTA's superior performance compared with leading RL methods on Lunar Lander and Frozen Lake from OpenAI Gym.
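The selective-control mechanism the abstract describes can be sketched in a few lines. This is a toy illustration of the two-player action selection only (the state names, policies, and intervention rule here are hypothetical, and learning the intervention rule is the hard part the paper addresses):

```python
def desta_step(state, task_policy, safety_policy, intervene):
    """One action selection under a DESTA-style scheme: the safety agent
    decides whether to take control at this state; otherwise the task
    agent acts freely. Returns the chosen action and which agent acted."""
    if intervene(state):                       # safety agent seizes control
        return safety_policy(state), "safety"
    return task_policy(state), "task"          # task agent acts everywhere else
```

In the actual framework both the intervention rule and the two policies are learned, so that interventions concentrate on the few states where future safety violations are likely.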
The classification of histopathology images carries great value in both cancer diagnosis and pathology research. However, multiple factors, such as variations caused by magnification and class imbalance, make it a challenging task on which conventional methods that learn from image-label datasets often perform unsatisfactorily. We observe that tumors of the same class often share common morphological patterns. To exploit this fact, we propose an approach that learns similarity-based multi-scale embeddings (SMSE) for magnification-independent histopathology image classification. In particular, a pair loss and a triplet loss are leveraged to learn similarity-based embeddings from image pairs or image triplets. The learned embeddings provide accurate measurements of the similarity between images, which is regarded as a more effective representation of histopathology morphology than normal image features. Furthermore, to ensure that the generated models are magnification-independent, images acquired at different magnification factors are simultaneously fed to the network during training to learn multi-scale embeddings. In addition to SMSE, to eliminate the effect of class imbalance, instead of using a hard sample mining strategy that intuitively discards some easy samples, we introduce a new reinforced focal loss that simultaneously punishes hard misclassified samples while suppressing easy well-classified samples. Experimental results show that SMSE improves the performance of histopathology image classification for both breast and liver cancers by a large margin compared with previous methods. In particular, SMSE achieves the best performance on the BreakHis benchmark, with improvements ranging from 5% to 18% compared with traditional features.
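For concreteness, the two metric-learning objectives the abstract leans on can be sketched as follows. These are the standard textbook forms of the pair (contrastive) loss and triplet loss on single embedding vectors, not the paper's exact formulation:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Hinge-style triplet loss: pull the anchor toward the positive and
    push it at least `margin` farther from the negative."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

def pair_loss(a, b, same_class, margin=1.0):
    """Contrastive pair loss: squared distance for same-class pairs,
    squared hinge on the margin for different-class pairs."""
    d = np.linalg.norm(a - b)
    return d ** 2 if same_class else max(0.0, margin - d) ** 2
```

In training, such losses are minimized over batches of pairs/triplets so that same-class morphologies cluster in the embedding space regardless of magnification.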
We study non-parametric estimation of the value function of an infinite-horizon $\gamma$-discounted Markov reward process (MRP) using observations from a single trajectory. We provide non-asymptotic guarantees for a general family of kernel-based multi-step temporal difference (TD) estimates, including canonical $K$-step look-ahead TD for $K = 1, 2, \ldots$ and the TD$(\lambda)$ family for $\lambda \in [0,1)$ as special cases. Our bounds capture the estimation error's dependence on Bellman fluctuations, the mixing time of the Markov chain, any mis-specification in the model, as well as the choice of weight function defining the estimator itself, and reveal some delicate interactions between mixing time and model mis-specification. For a given TD method applied to a well-specified model, its statistical error under trajectory data is similar to that of i.i.d. sample transition pairs, whereas under mis-specification, temporal dependence in the data inflates the statistical error. However, any such deterioration can be mitigated by increased look-ahead. We complement our upper bounds by proving minimax lower bounds that establish the optimality of TD-based methods with appropriately chosen look-ahead and weighting, and reveal some fundamental differences between value function estimation and ordinary non-parametric regression.
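The canonical $K$-step look-ahead TD estimator mentioned above has a simple tabular form. As a minimal sketch (tabular rather than kernel-based, with hypothetical hyperparameters, purely to illustrate the $K$-step bootstrapped update):

```python
import numpy as np

def k_step_td(states, rewards, n_states, K=3, gamma=0.9, alpha=0.1, passes=200):
    """Tabular K-step look-ahead TD estimation of a value function from a
    single trajectory. `states` has length T+1 (it includes the state the
    last transition lands in); `rewards` has length T."""
    V = np.zeros(n_states)
    T = len(rewards)
    for _ in range(passes):
        for t in range(T):
            k = min(K, T - t)  # truncate the look-ahead at the trajectory end
            # k-step return: discounted rewards plus a bootstrapped tail value
            G = sum(gamma ** i * rewards[t + i] for i in range(k))
            G += gamma ** k * V[states[t + k]]
            V[states[t]] += alpha * (G - V[states[t]])
    return V
```

On a deterministic chain with constant reward $r$, every state's value converges to $r/(1-\gamma)$, which makes a convenient sanity check; TD$(\lambda)$ corresponds to geometrically weighting such $k$-step returns over all $k$.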